Researchers: Generative AI Might Make It Easier to Target Reporters
2023-07-05
The artificial intelligence (AI) tool ChatGPT was released last year.
Since then, the possibility that artificial intelligence might take over the world has worried people more than ever.
A new report from New York University's Stern Center for Business and Human Rights identifies eight risks of generative AI.
Some of those risks especially concern reporters and news media organizations.
Disinformation, computer attacks, privacy violations, and the weakening of news media are among the risks the team reports.
Stern Center assistant director Paul Barrett was a co-writer of the report.
He told VOA that people are confused about what risks AI presents now and in the future.
Barrett said: "We shouldn't get paralyzed by the question of, 'Oh my God, will this technology lead to killer robots that are going to destroy humanity?'"
The systems being released right now are not going to lead to the extreme danger in the future some worry about, Barrett explained.
The report urges lawmakers to face some of the existing problems with AI.
Among the biggest concerns with AI are the dangers it presents for reporters and activists.
The report says AI makes it much easier to dox reporters online.
Doxxing is when a person's private information, like their address, is posted publicly.
Disinformation is another problem, as AI makes it easier to create propaganda.
The report noted Russia's involvement in the 2016 U.S. presidential election.
It said use of AI could have widened and deepened Russia's interference with the process.
Barrett said AI "is going to be a huge engine of efficiency, but it's also going to make much more efficient the production of disinformation."
Disinformation could also be dangerous for news reporters because it could lead the public to trust them less.
And AI could worsen financial problems for news media groups.
People are less likely to seek out news reports, the researchers say, because they can get answers from ChatGPT instead.
The report says that could shrink traffic to news sites, causing losses in their advertising revenue.
However, AI could also be helpful for the news industry.
The technology can examine data, fact-check sources, and produce headlines speedily.
The report urges the government to supervise AI companies more in the future.
25"Congress, regulators, the public - and the industry, for that matter, need to pay attention to the immediate potential risks," Barrett said.
I'm Caty Weaver.
The Associated Press reported this story. Dominic Varela adapted it for VOA Learning English.
____________________________________________________________
Words in This Story
confuse - v. to make someone unable to think clearly or understand
paralyzed - v. unable to move or act
efficient - adj. capable of producing desired results with little or no waste (as of time or materials)
regulator - n. a person or body that supervises a particular industry or business activity
potential - adj. existing in possibility; capable of development into actuality